Convergence Estimates of Krylov Subspace Methods for the Approximation of Matrix Functions Using Tools from Potential Theory

Author

  • Stefan Güttel
Abstract

This work is about the numerical evaluation of the expression f(A)b, where A ∈ C^{N×N} is an arbitrary square matrix, b ∈ C^N is a vector and f is a suitable matrix function. This task is of very high importance in all applied sciences since it is a generalization of the following problems, to name just a few:

• Solve the linear system of equations Ax = b. The solution is x = f(A)b, where f(z) = 1/z.
• Solve an ordinary differential equation y′(t) = Ay(t) with given initial value y(0) = b. The solution is y(t) = f(tA)b, where f(z) = exp(z).
• Solve identification problems in stochastic semigroups. Here one needs to compute f(A)b with f(z) = log(z) (see Singer, Spilermann [29]).
• Simulate Brownian motion of molecules. Here one needs to determine f(A)b with f(z) = √z (see Ericsson [9]).

In the first chapter we define the term f(A). There are different equivalent approaches; a constructive one involves the Jordan canonical form of the matrix A. Later we shall see that f(A) = p_{f,A}(A), where p_{f,A} is a polynomial of degree ≤ N − 1 that interpolates f at the eigenvalues of A. In practical applications N is very large and the spectrum of A is not known. Therefore we determine an f-interpolating polynomial p_{f,m} of low degree m − 1 ≪ N and hope that p_{f,m}(A)b ≈ f(A)b. The resulting methods are called Krylov subspace methods or polynomial methods, and they are considered in Chapter 2.

The choice of the interpolation nodes for p_{f,m} is an important issue. If the interpolation nodes are uniformly distributed on a compact subset of C, we may analyze the asymptotic convergence behavior of the arising methods using the theory of interpolation and best approximation. This is done in Chapter 3. Another very popular choice of interpolation nodes is the Ritz values. The resulting Arnoldi approximations converge in many cases very fast to f(A)b. To explain this, it is necessary to describe the behavior of Ritz values. In Chapter 4 we present a theory on the convergence of Ritz values, which was mainly developed by Beckermann and …
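As a concrete illustration of the polynomial Krylov idea described above (this sketch is not part of the original work; the matrix, its size, and the choice f = exp are illustrative), the Arnoldi process builds an orthonormal basis V_m of the Krylov subspace K_m(A, b) together with the small Hessenberg matrix H_m = V_m* A V_m, and the approximation f(A)b ≈ ||b||_2 V_m f(H_m) e_1 only requires evaluating f on an m × m matrix:

import numpy as np
from scipy.linalg import expm

def arnoldi(A, b, m):
    # Arnoldi process: orthonormal basis V of K_m(A, b) and Hessenberg H = V^* A V.
    N = len(b)
    V = np.zeros((N, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # lucky breakdown: an invariant subspace was found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

rng = np.random.default_rng(0)
N = 500
A = -5 * np.eye(N) + 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)   # synthetic test matrix
b = rng.standard_normal(N)

V, H = arnoldi(A, b, m=30)
fm = np.linalg.norm(b) * (V @ expm(H)[:, 0])   # Krylov (Arnoldi) approximation of exp(A) b
exact = expm(A) @ b                            # dense reference, feasible only for small N
print("relative error:", np.linalg.norm(fm - exact) / np.linalg.norm(exact))

Increasing m typically reduces the error; how fast it does so is exactly the convergence question analyzed in the chapters outlined above.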


Similar articles

Preconditioned Generalized Minimal Residual Method for Solving Fractional Advection-Diffusion Equation

Introduction: Fractional differential equations (FDEs) have attracted much attention and have been widely used in fields such as finance, physics, image processing, and biology. It is not always possible to find an analytical solution for such equations, so an approximate solution or numerical scheme may be a good approach, particularly schemes from numerical linear algebra for solving ...
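The abstract above is truncated, but its title names a standard technique; purely as a generic illustration (not the cited paper's actual discretization or preconditioner), the following sketch runs SciPy's GMRES with an incomplete-LU preconditioner on a synthetic sparse system:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Synthetic nonsymmetric tridiagonal system; a discretized fractional
# advection-diffusion operator would take its place in the cited setting.
n = 2000
main = 2.0 * np.ones(n)
lower = -1.0 * np.ones(n - 1)
upper = 0.8 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csc")
b = np.ones(n)

# Incomplete LU factorization used as preconditioner M ≈ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=50)
print("GMRES flag:", info, " residual norm:", np.linalg.norm(b - A @ x))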


Superlinear convergence of the rational Arnoldi method for the approximation of matrix functions

A superlinear convergence bound for rational Arnoldi approximations to functions of matrices is derived. This bound generalizes the well-known superlinear convergence bound for the CG method to more general functions with finite singularities and to rational Krylov spaces. A constrained equilibrium problem from potential theory is used to characterize a max-min quotient of a nodal rational func...
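As background on the spaces involved (not taken from the cited paper; the shifts, the matrix, and the choice f(z) = √z are illustrative), a rational Krylov approximation of f(A)b can be obtained by orthogonalizing successive shifted solves and compressing A onto that basis:

import numpy as np
from scipy.linalg import solve, sqrtm

rng = np.random.default_rng(1)
N = 300
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
A = Q @ np.diag(np.linspace(1.0, 1e4, N)) @ Q.T    # symmetric positive definite test matrix
b = rng.standard_normal(N)

shifts = [-1.0, -10.0, -100.0, -1000.0]            # illustrative poles on the negative real axis
vectors = [b]
for xi in shifts:
    vectors.append(solve(A - xi * np.eye(N), vectors[-1]))   # successive shifted solves
V, _ = np.linalg.qr(np.column_stack(vectors))      # orthonormal basis of the rational Krylov space

Am = V.T @ A @ V                                   # compression of A onto the space
fm = V @ (sqrtm(Am) @ (V.T @ b))                   # rational Krylov approximation of sqrt(A) b
exact = sqrtm(A) @ b
print("relative error:", np.linalg.norm(fm - exact) / np.linalg.norm(exact))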


Analysis of Some Krylov Subspace Methods for Normal Matrices via Approximation Theory and Convex Optimization

Krylov subspace methods are strongly related to polynomial spaces and their convergence analysis can often be naturally derived from approximation theory. Analyses of this type lead to discrete min-max approximation problems over the spectrum of the matrix, from which upper bounds of the relative Euclidean residual norm are derived. A second approach to analyzing the convergence rate of the GMR...
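For reference, the discrete min-max problem alluded to takes the following well-known form: for a diagonalizable matrix A = XΛX⁻¹ the GMRES residuals satisfy

\[
\frac{\lVert r_m \rVert_2}{\lVert r_0 \rVert_2}
\;\le\;
\kappa_2(X)\,
\min_{\substack{p \in \Pi_m \\ p(0)=1}}\;
\max_{\lambda \in \Lambda(A)} \lvert p(\lambda) \rvert ,
\]

where Π_m denotes the polynomials of degree at most m and Λ(A) the spectrum of A. For a normal matrix X can be chosen unitary, so κ₂(X) = 1 and the convergence analysis reduces to a polynomial min-max problem over the spectrum alone, which is the starting point of the approximation-theoretic approach described above.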


Convergence of Restarted Krylov Subspaces to Invariant Subspaces

The performance of Krylov subspace eigenvalue algorithms for large matrices can be measured by the angle between a desired invariant subspace and the Krylov subspace. We develop general bounds for this convergence that include the effects of polynomial restarting and impose no restrictions concerning the diagonalizability of the matrix or its degree of nonnormality. Associated with a desired se...
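To make the measured quantity concrete (an illustration only, not the algorithm of the cited paper; the matrix and dimensions are synthetic), the sketch below computes the largest principal angle between a Krylov subspace and the invariant subspace belonging to the three dominant eigenvalues, using scipy.linalg.subspace_angles:

import numpy as np
from scipy.linalg import eig, subspace_angles

rng = np.random.default_rng(2)
N, k, m = 400, 3, 25
A = np.diag(np.concatenate([[100.0, 80.0, 60.0], rng.uniform(0.0, 1.0, N - k)]))
A += 0.01 * rng.standard_normal((N, N))            # mild nonnormal perturbation
b = rng.standard_normal(N)

# Orthonormal basis of the invariant subspace for the k dominant eigenvalues.
w, X = eig(A)
idx = np.argsort(-np.abs(w))[:k]
U = np.linalg.qr(X[:, idx].real)[0]

# Orthonormal basis of the Krylov subspace K_m(A, b), columns normalized as they are generated.
K = np.empty((N, m))
K[:, 0] = b / np.linalg.norm(b)
for j in range(1, m):
    v = A @ K[:, j - 1]
    K[:, j] = v / np.linalg.norm(v)
Q = np.linalg.qr(K)[0]

angles = subspace_angles(U, Q)                     # principal angles in radians, largest first
print("largest principal angle:", angles[0])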


A Krylov subspace method for the approximation of bivariate matrix functions

Bivariate matrix functions provide a unified framework for various tasks in numerical linear algebra, including the solution of linear matrix equations and the application of the Fréchet derivative. In this work, we propose a novel tensorized Krylov subspace method for approximating such bivariate matrix functions and analyze its convergence. While this method is already known for some instance...
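For orientation, one standard way to define a bivariate matrix function (stated here for Hermitian A and B; this is general background, not quoted from the cited work) uses the spectral decompositions A = Σᵢ λᵢ xᵢ xᵢ* and B = Σⱼ μⱼ yⱼ yⱼ*:

\[
f\{A, B\}(C) \;=\; \sum_{i,j} f(\lambda_i, \mu_j)\,\bigl(x_i^{*} C\, y_j\bigr)\, x_i y_j^{*} .
\]

Choosing f(x, y) = 1/(x + y) (assuming λ_i + μ_j ≠ 0 for all i, j) gives X = f{A, B}(C) with AX + XB = C, i.e. the solution of a Sylvester equation, which illustrates the link to linear matrix equations mentioned above.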


Convergence Analysis of Krylov Subspace Iterations with Methods from Potential Theory

Krylov subspace iterations are among the best-known and most widely used numerical methods for solving linear systems of equations and for computing eigenvalues of large matrices. These methods are polynomial methods whose convergence behavior is related to the behavior of polynomials on the spectrum of the matrix. This leads to an extremal problem in polynomial approximation theory: how small ...
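Concretely, the extremal problem referred to here, together with its classical potential-theoretic answer, can be stated as follows for a compact set Σ ⊂ C that contains the spectrum and neither contains nor surrounds the origin:

\[
E_m(\Sigma) \;=\; \min_{\substack{p \in \Pi_m \\ p(0)=1}}\; \max_{z \in \Sigma} \lvert p(z) \rvert ,
\qquad
\lim_{m \to \infty} E_m(\Sigma)^{1/m} \;=\; e^{-g_\Sigma(0)} ,
\]

where g_Σ is the Green's function of the unbounded complement of Σ with pole at infinity. The quantity e^{-g_Σ(0)} is the asymptotic convergence factor governing the linear convergence rate of the associated Krylov iteration.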




Publication date: 2015